
    Strength of Protection for Geographical Indications: Promotion Incentives and Welfare Effects

    We address the question of how the strength of protection for geographical indications (GIs) affects the GI industry’s promotion incentives, equilibrium market outcomes, and the distribution of welfare. Geographical indication producers engage in informative advertising by associating their true quality premium (relative to a substitute product) with a specific label emphasizing the GI’s geographic origin. The extent to which the names/words of the GI label can be used and/or imitated by competing products (which depends on the strength of GI protection) determines how informative the GI promotion messages can be. Consumers’ heterogeneous preferences (vis-à-vis the GI quality premium) are modeled in a vertically differentiated framework. Both the GI industry and the substitute product industry are assumed to be competitive (with free entry). The model is calibrated and solved for alternative parameter values. Results show that producers of the GI and of the lower-quality substitute good have divergent interests: GI producers are better off with full protection, whereas the substitute good’s producers prefer intermediate levels of protection (but they never prefer zero protection because they benefit indirectly if the GI producers’ incentives to promote are preserved). For consumers and aggregate welfare, the preferred level of protection depends on the model’s parameters, with an intermediate level of protection being optimal in many circumstances.
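    The abstract describes the modeling framework only in broad strokes; the Python sketch below illustrates the general shape of such a vertically differentiated market, in which a protection-strength parameter s in [0, 1] governs how much of the true quality premium the GI label credibly communicates to consumers. All functional forms, parameter values, and the perceived-quality mapping are illustrative assumptions for exposition, not the paper's calibration (which also models endogenous promotion and free entry).

```python
# Stylized sketch (assumed parameters): consumers with heterogeneous taste for
# quality choose between a GI product, a substitute, and an outside option.
import numpy as np

def market_outcomes(s, q_gi=1.5, q_sub=1.0, p_gi=1.2, p_sub=0.8, n=100_000):
    # s in [0, 1]: strength of GI protection; weaker protection dilutes the label's message.
    rng = np.random.default_rng(0)
    theta = rng.uniform(0.0, 1.0, n)                 # heterogeneous taste for quality
    q_perceived = q_sub + s * (q_gi - q_sub)         # credibly communicated GI quality
    u_gi = theta * q_perceived - p_gi                # utility from the GI product
    u_sub = theta * q_sub - p_sub                    # utility from the substitute
    u_none = np.zeros(n)                             # outside option
    utilities = np.column_stack([u_none, u_sub, u_gi])
    choice = utilities.argmax(axis=1)                # each consumer picks the best option
    shares = np.bincount(choice, minlength=3) / n
    consumer_surplus = utilities.max(axis=1).mean()
    return {"share_none": shares[0], "share_substitute": shares[1],
            "share_gi": shares[2], "consumer_surplus": consumer_surplus}

for s in (0.0, 0.5, 1.0):
    print(f"s = {s}: {market_outcomes(s)}")
```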

    The performance effects of creative imitation on original products: Evidence from lab and field experiments

    Research Summary: A market entrant often challenges the incumbent using creative imitation: The entrant creatively combines imitated aspects of the original with its own innovative characteristics to create a distinct offering. Using lab and field experiments to examine creative imitation in China, we find that the effects of creative imitations on the originals depend on the creative imitation's quality. We explore the underlying mechanisms and show that including a low-quality creative imitation in the retail choice set increases satisfaction with and choice of the original, while a moderate-quality creative imitation does the opposite. Moreover, creative imitation affects consumers' satisfaction with the original by influencing whether their experience with the original verifies their expectations. Our paper characterizes these creative imitation effects to help incumbent firms address them effectively. Managerial Summary: When the incumbent is challenged by an entrant using creative imitation, consumers may react differently to the incumbent, and understanding consumers' reactions allows the incumbent to make better strategic decisions about how to address the challenge. Using lab and field experiments, we investigate creative imitations with two quality levels common in our empirical context, low quality and moderate quality, and examine how and why they differentially affect the originals. We find that the presence of a low-quality creative imitation actually increased choice of the original by enhancing consumers' satisfaction with it, while a moderate-quality creative imitation reduced choice of the original by undermining satisfaction with it. Our research suggests the incumbent should address moderate-quality creative imitations' challenges to customer satisfaction while temporarily tolerating low-quality creative imitations.

    Rule-Based Forecasting: Using Judgment in Time-Series Extrapolation

    Rule-Based Forecasting (RBF) is an expert system that uses judgment to develop and apply rules for combining extrapolations. The judgment comes from two sources: forecasting expertise and domain knowledge. Forecasting expertise is based on more than a half century of research. Domain knowledge is obtained in a structured way; one example of domain knowledge is managers’ expectations about trends, which we call “causal forces.” Time series are described in terms of 28 conditions, which are used to assign weights to extrapolations. Empirical results on multiple sets of time series show that RBF produces more accurate forecasts than those from traditional extrapolation methods or equal-weights combined extrapolations. RBF is most useful when it is based on good domain knowledge, the domain knowledge is important, the series is well behaved (such that patterns can be identified), there is a strong trend in the data, and the forecast horizon is long. Under ideal conditions, the errors of RBF’s forecasts were one-third less than those for equal-weights combining. When these conditions are absent, RBF neither improves nor harms forecast accuracy. Some of RBF’s rules can be used with traditional extrapolation procedures. In a series of studies, rules based on causal forces improved the selection of forecasting methods, the structuring of time series, and the assessment of prediction intervals.
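    As a rough illustration of the combining idea described above, the sketch below blends several extrapolations and lets simple judgmental rules keyed to series conditions shift weight among them. The component methods, the two toy conditions, and the weight adjustments are hypothetical stand-ins, not the actual 28-condition rule base of RBF.

```python
# Minimal sketch of rule-based combining (assumed components and rules).
import numpy as np

def component_extrapolations(y, horizon):
    """Three simple extrapolations to be combined (illustrative choices)."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)               # linear trend fit
    future_t = np.arange(len(y), len(y) + horizon)
    return {
        "random_walk": np.full(horizon, y[-1]),           # no-change forecast
        "linear_trend": intercept + slope * future_t,     # full trend extrapolation
        "recent_mean": np.full(horizon, y[-4:].mean()),   # short moving average
    }

def rule_based_weights(y, causal_forces="growth"):
    """Start from equal weights, then apply two toy judgmental rules."""
    w = dict.fromkeys(["random_walk", "linear_trend", "recent_mean"], 1.0)
    slope = np.polyfit(np.arange(len(y)), y, 1)[0]
    if causal_forces == "growth" and slope > 0:
        w["linear_trend"] += 1.0   # rule: trend agrees with causal forces -> trust it more
    if causal_forces == "decay" and slope > 0:
        w["random_walk"] += 1.0    # rule: series runs contrary to causal forces -> be conservative
    total = sum(w.values())
    return {name: weight / total for name, weight in w.items()}

def rule_based_forecast(y, horizon=3, causal_forces="growth"):
    components = component_extrapolations(y, horizon)
    weights = rule_based_weights(y, causal_forces)
    return sum(weights[name] * components[name] for name in components)

print(rule_based_forecast(np.array([10.0, 11.0, 13.0, 14.0, 16.0, 17.0])))
```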

    A protein functionalization platform based on selective reactions at methionine residues.

    Nature has a remarkable ability to carry out site-selective post-translational modification of proteins, thereby enabling a marked increase in their functional diversity1. Inspired by this, chemical tools have been developed for the synthetic manipulation of protein structure and function, and have become essential to the continued advancement of chemical biology, molecular biology and medicine. However, the number of chemical transformations that are suitable for effective protein functionalization is limited, because the stringent demands inherent to biological systems preclude the applicability of many potential processes2. These chemical transformations typically need to be selective at a single site on a protein, proceed with very fast reaction rates, operate under biologically ambient conditions and provide homogeneous products with near-perfect conversion2-7. Although many bioconjugation methods exist at cysteine, lysine and tyrosine, a method targeting a less-explored amino acid would considerably expand the protein functionalization toolbox. Here we report the development of a multifaceted approach to protein functionalization based on chemoselective labelling at methionine residues. By exploiting the electrophilic reactivity of a bespoke hypervalent iodine reagent, the S-Me group in the side chain of methionine can be targeted. The bioconjugation reaction is fast, selective, operates at low-micromolar concentrations and is complementary to existing bioconjugation strategies. Moreover, it produces a protein conjugate that is itself a high-energy intermediate with reactive properties and can serve as a platform for the development of secondary, visible-light-mediated bioorthogonal protein functionalization processes. The merger of these approaches provides a versatile platform for the development of distinct transformations that deliver information-rich protein conjugates directly from the native biomacromolecules.

    Golden Rule of Forecasting: Be Conservative

    This article proposes a unifying theory, or Golden Rule, of forecasting. The Golden Rule of Forecasting is to be conservative. A conservative forecast is consistent with cumulative knowledge about the present and the past. To be conservative, forecasters must seek out and use all knowledge relevant to the problem, including knowledge of methods validated for the situation. Twenty-eight guidelines are logically deduced from the Golden Rule. A review of evidence identified 105 papers with experimental comparisons; 102 support the guidelines. Ignoring a single guideline increased forecast error by more than two-fifths on average. Ignoring the Golden Rule is likely to harm accuracy most when the situation is uncertain and complex, and when bias is likely. Non-experts who use the Golden Rule can identify dubious forecasts quickly and inexpensively. To date, ignorance of research findings, bias, sophisticated statistical procedures, and the proliferation of big data have led forecasters to violate the Golden Rule. As a result, despite major advances in evidence-based forecasting methods, forecasting practice in many fields has failed to improve over the past half-century.
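    One common way the "be conservative" principle is operationalized in trend extrapolation is to damp the estimated trend toward no change as the horizon lengthens. The sketch below contrasts a damped-trend forecast with full trend extrapolation; the damping factor and the data are illustrative assumptions, not values from the article.

```python
# Sketch of conservative (damped-trend) versus full-trend extrapolation.
import numpy as np

def damped_trend_forecast(y, horizon, phi=0.8):
    """Conservative forecast: each step ahead adds a progressively damped share of the trend."""
    slope = np.polyfit(np.arange(len(y)), y, 1)[0]              # estimated per-period trend
    damped_steps = np.cumsum(phi ** np.arange(1, horizon + 1))  # phi + phi^2 + ... damps the trend
    return y[-1] + slope * damped_steps

def full_trend_forecast(y, horizon):
    """Non-conservative alternative: extrapolate the full estimated trend every period."""
    slope = np.polyfit(np.arange(len(y)), y, 1)[0]
    return y[-1] + slope * np.arange(1, horizon + 1)

y = np.array([100.0, 104.0, 109.0, 115.0, 118.0, 124.0])
print("conservative (damped):", damped_trend_forecast(y, 5))
print("full trend:           ", full_trend_forecast(y, 5))
```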

    Extrapolation for Time-Series and Cross-Sectional Data

    Extrapolation methods are reliable, objective, inexpensive, quick, and easily automated. As a result, they are widely used, especially for inventory and production forecasts, for operational planning for up to two years ahead, and for long-term forecasts in some situations, such as population forecasting. This paper provides principles for selecting and preparing data, making seasonal adjustments, extrapolating, assessing uncertainty, and identifying when to use extrapolation. The principles are based on received wisdom (i.e., experts’ commonly held opinions) and on empirical studies. Some of the more important principles are:
    • In selecting and preparing data, use all relevant data and adjust the data for important events that occurred in the past.
    • Make seasonal adjustments only when seasonal effects are expected and only if there is good evidence by which to measure them.
    • In extrapolating, use simple functional forms. Weight the most recent data heavily if there are small measurement errors, stable series, and short forecast horizons (see the sketch after this list). Domain knowledge and forecasting expertise can help to select effective extrapolation procedures. When there is uncertainty, be conservative in forecasting trends. Update extrapolation models as new data are received.
    • To assess uncertainty, make empirical estimates to establish prediction intervals.
    • Use pure extrapolation when many forecasts are required, little is known about the situation, the situation is stable, and expert forecasts might be biased.
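    The sketch below illustrates two of the principles listed above under stated assumptions: weighting recent data via simple exponential smoothing (a larger alpha weights recent observations more heavily) and forming empirical prediction intervals from past one-step-ahead errors. The smoothing constant, the data, and the interval quantiles are illustrative choices, not recommendations from the paper.

```python
# Sketch: simple exponential smoothing plus an empirical prediction interval.
import numpy as np

def exponential_smoothing(y, alpha=0.5):
    """One-step-ahead forecasts from simple exponential smoothing.

    level_t = alpha * y_t + (1 - alpha) * level_{t-1}; a larger alpha weights
    recent observations more heavily.
    """
    level = y[0]
    forecasts = []
    for obs in y[1:]:
        forecasts.append(level)                    # forecast for this period uses the prior level
        level = alpha * obs + (1 - alpha) * level  # update the level with the new observation
    return np.array(forecasts), level

y = np.array([52.0, 55.0, 53.0, 58.0, 60.0, 59.0, 63.0, 64.0])
fitted, next_forecast = exponential_smoothing(y, alpha=0.5)
errors = y[1:] - fitted                            # in-sample one-step-ahead errors
lo, hi = np.percentile(errors, [5, 95])            # empirical 90% prediction interval
print(f"next-period forecast: {next_forecast:.1f}")
print(f"empirical 90% interval: [{next_forecast + lo:.1f}, {next_forecast + hi:.1f}]")
```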

    Selecting Forecasting Methods

    I examined six ways of selecting forecasting methods: Convenience, “what’s easy,” is inexpensive but risky. Market popularity, “what others do,” sounds appealing but is unlikely to be of value because popularity and success may not be related and because it overlooks some methods. Structured judgment, “what experts advise,” which is to rate methods against prespecified criteria, is promising. Statistical criteria, “what should work,” are widely used and valuable, but risky if applied narrowly. Relative track records, “what has worked in this situation,” are expensive because they depend on conducting evaluation studies. Guidelines from prior research, “what works in this type of situation,” rely on published research and offer a low-cost, effective approach to selection. Using a systematic review of prior research, I developed a flow chart to guide forecasters in selecting among ten forecasting methods. Some key findings: Given enough data, quantitative methods are more accurate than judgmental methods. When large changes are expected, causal methods are more accurate than naive methods. Simple methods are preferable to complex methods; they are easier to understand, less expensive, and seldom less accurate. To select a judgmental method, determine whether there are large changes, frequent forecasts, conflicts among decision makers, and policy considerations. To select a quantitative method, consider the level of knowledge about relationships, the amount of change involved, the type of data, the need for policy analysis, and the extent of domain knowledge. When selection is difficult, combine forecasts from different methods.
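    To make the flow-chart idea concrete, the sketch below renders a highly simplified selection routine as a decision function. The conditions tested and the methods returned are hypothetical stand-ins for the article's flow chart, not a reproduction of it.

```python
# Toy decision function sketching the method-selection logic (assumed conditions).
def select_forecasting_method(enough_quantitative_data: bool,
                              large_changes_expected: bool,
                              good_knowledge_of_relationships: bool,
                              conflicts_among_decision_makers: bool) -> str:
    """Return a (hypothetical) recommended class of forecasting method."""
    if not enough_quantitative_data:
        # Judgmental route: the appropriate structured method depends on the decision setting.
        if conflicts_among_decision_makers:
            return "structured judgmental method suited to conflict situations"
        return "structured judgmental method (e.g., expert forecasts with decomposition)"
    # Quantitative route.
    if large_changes_expected and good_knowledge_of_relationships:
        return "causal (econometric) method"
    return "simple extrapolation; combine forecasts from different methods if selection remains difficult"

print(select_forecasting_method(True, True, True, False))
print(select_forecasting_method(False, False, False, True))
```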